Equi-Tuning: Group Equivariant Fine-Tuning of Pretrained Models

Authors

Abstract

We introduce equi-tuning, a novel fine-tuning method that transforms (potentially non-equivariant) pretrained models into group equivariant models while incurring minimum L_2 loss between the feature representations of the pretrained and equi-tuned models. Large pretrained models can be equi-tuned for different groups to satisfy the needs of various downstream tasks. Equi-tuned models benefit both from group equivariance as an inductive bias and from the semantic priors of the pretrained model. We provide applications of equi-tuning on three different tasks: image classification, compositional generalization in language, and fairness in natural language generation (NLG). We also provide a group-theoretic definition of fairness in NLG. The effectiveness of this definition is shown by testing it against a standard empirical definition of fairness. We provide experimental results for equi-tuning using a variety of pretrained models: Alexnet, Resnet, VGG, and Densenet for image classification; RNNs, GRUs, and LSTMs for compositional generalization; and GPT2 for fairness in NLG. We test these models on benchmark datasets across all considered tasks, and the results show the generality of the proposed method.
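
The core construction behind equi-tuning is group averaging: the map closest in L_2 to the pretrained features that is equivariant (here, invariant) under the chosen group is obtained by transforming the input with every group element, running the pretrained model, and averaging the results. Below is a minimal PyTorch sketch, assuming the group of 90-degree image rotations and a classifier whose logits should be invariant; the EquiTuned wrapper name is illustrative and not taken from the paper's released code.

```python
# Sketch of equi-tuning for image classification, assuming the group is
# C4 (0/90/180/270-degree rotations) and the classification logits should
# be invariant to it. Illustrative, not the authors' reference code.
import torch
import torch.nn as nn
import torchvision.models as models


class EquiTuned(nn.Module):
    """Wraps a pretrained backbone and averages its outputs over the group."""

    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Apply each group element to the input (rotate by k * 90 degrees),
        # run the pretrained model, and average the logits. Group averaging
        # gives the invariant map with minimum L_2 distance to the original.
        outputs = [self.backbone(torch.rot90(x, k, dims=(2, 3))) for k in range(4)]
        return torch.stack(outputs, dim=0).mean(dim=0)


# Usage: wrap a pretrained Resnet, then fine-tune the wrapper end to end
# on the downstream task as usual (weights string requires torchvision >= 0.13).
model = EquiTuned(models.resnet18(weights="IMAGENET1K_V1"))
logits = model(torch.randn(2, 3, 224, 224))  # invariant to 90-degree rotations
```

Because the wrapper only composes the pretrained forward pass with group transformations and averaging, every parameter of the backbone is reused, and standard fine-tuning of the wrapped model preserves the equivariance by construction.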


Similar articles

Fine Tuning in Supersymmetric Models

The solution to fine-tuning is one of the principal motivations for Beyond the Standard Model (BSM) studies. However, constraints on new physics indicate that many of these BSM models are also fine-tuned (although to a much lesser extent). To compare these BSM models it is essential that we have a reliable, quantitative measure of tuning. We review the measures of tuning used in the literature a...

Fine-tuning and the Wilson renormalization group

We use the Wilson renormalization group (RG) formulation to solve the fine-tuning procedure needed in renormalization schemes breaking the gauge symmetry. To illustrate this method we systematically compute the non-invariant couplings of the ultraviolet action of the SU(2) pure Yang-Mills theory at one-loop order.

Measures of Fine Tuning

Fine-tuning criteria are frequently used to place upper limits on the masses of superpartners in supersymmetric extensions of the standard model. However, commonly used prescriptions for quantifying naturalness have some important shortcomings. Motivated by this, we propose new criteria for quantifying fine tuning that can be used to place upper limits on superpartner masses with greater fideli...
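
For context, the quantity such papers typically refine is the standard sensitivity measure from the BSM literature: the logarithmic sensitivity of the Z mass to a fundamental parameter. A minimal statement of that measure (my paraphrase, not this article's notation or its proposed refinements) is:

```latex
% Sensitivity of the Z mass m_Z to a fundamental parameter a_i;
% a parameter point is called fine-tuned when \Delta is large.
\Delta(a_i) \;=\; \left| \frac{a_i}{m_Z^2}\,\frac{\partial m_Z^2}{\partial a_i} \right|
\;=\; \left| \frac{\partial \ln m_Z^2}{\partial \ln a_i} \right|,
\qquad \Delta_{\max} \;=\; \max_i \, \Delta(a_i).
```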

Fine-tuning helper Ts

Lethal shock is subverted by crippling a family of G proteins in endothelial cells, according to Korhonen and colleagues on page 411. Bee stings or other allergens can cause anaphylactic shock in severely allergic individuals by triggering mast cell activation and the release of anaphylactic mediators like histamine and platelet activating factor (PAF). At high enough levels, these mediators ca...

The Fine-Tuning Argument

The Fine-Tuning Argument (FTA) is a variant of the Design Argument for the existence of God. In this paper the evidence of fine-tuning is explained and the Fine-Tuning Design Argument for God is presented. Then two objections are covered. The first objection is that fine-tuning can be explained in terms of the existence of multiple universes (the ‘multiverse’) plus the operation of the anthropi...

Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2023

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v37i6.25832